14 research outputs found

    SurReal: enhancing Surgical simulation Realism using style transfer

    Surgical simulation is an increasingly important element of surgical education. Simulation can help address some of the significant challenges of developing surgical skills with limited time and resources. The photo-realistic fidelity of a simulation is a key feature that can improve the trainee experience and skill-transfer ratio. In this paper, we demonstrate how the visual fidelity of existing surgical simulations can be enhanced by performing style transfer of multi-class labels from real surgical video onto synthetic content. We demonstrate our approach on simulations of cataract surgery using real data labels from an existing public dataset. Our results highlight the feasibility of the approach, as well as the potential to extend the technique with additional temporal constraints and to other applications.
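    The style transfer described above rests on matching texture statistics between real and synthetic frames. As a rough illustration of that idea only (not the paper's actual network, losses or per-class machinery), the classic Gram-matrix style representation can be sketched in NumPy:

```python
import numpy as np

def gram_matrix(features):
    """Channel-wise Gram matrix of a (C, H, W) feature map --
    the standard texture-statistics representation used in
    neural style transfer."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(synthetic_feats, real_feats):
    """Mean squared difference between Gram matrices: driving this
    toward zero pushes the synthetic frame's texture statistics
    toward those of the real surgical video."""
    g_syn = gram_matrix(synthetic_feats)
    g_real = gram_matrix(real_feats)
    return float(np.mean((g_syn - g_real) ** 2))
```

    In practice such a loss is computed on deep feature maps rather than raw pixels, and restricting it per semantic label is what would make the transfer multi-class.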

    Affordable Mobile-based Simulator for Robotic Surgery

    Robotic surgery and novel surgical instrumentation hold great potential for safer, more accurate and consistent minimally invasive surgery. Their adoption, however, depends on access to training facilities and extensive surgical training. Robotic instruments require different dexterity skills than open or laparoscopic surgery, so surgeons must invest significant time in extensive training programmes. At the same time, hands-on experience represents an additional operational cost for hospitals, as the availability of robotic systems for training purposes is limited. These technological and financial barriers for surgeons and hospitals hinder the adoption of robotic surgery. In this paper, we present a mobile dexterity training kit for developing basic surgical techniques in an affordable setting. The system can be used to train basic surgical gestures and to develop the motor skills needed for manoeuvring robotic instruments. Our work presents the architecture and components needed to create a simulated environment for training sub-tasks, as well as a design for portable mobile manipulators that can be used as master controllers of different instruments. Preliminary study results demonstrate usability and skills development with this system.
    Comment: Hamlyn Symposium on Medical Robotics 201

    DeepPhase: Surgical Phase Recognition in CATARACTS Videos

    Automated surgical workflow analysis and understanding can assist surgeons in standardising procedures and can enhance post-surgical assessment, indexing and interventional monitoring. Video-based computer-assisted interventional (CAI) systems can perform workflow estimation by recognising surgical instruments and linking them to an ontology of procedural phases. In this work, we adopt a deep learning paradigm to detect surgical instruments in cataract surgery videos; the detections feed a recurrent surgical-phase inference network that encodes the temporal aspects of phase steps within the phase classification. Our models achieve results comparable to the state of the art for surgical tool detection and phase recognition, with accuracies of 99% and 78%, respectively.
    Comment: 8 pages, 3 figures, 1 table, MICCAI 201
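    The recurrent network above exploits the fact that surgical phases change slowly relative to the frame rate. As a much simpler stand-in for that temporal modelling (not the authors' RNN), even a sliding majority vote over per-frame phase labels removes isolated misclassifications:

```python
from collections import Counter

def smooth_phases(frame_phases, window=5):
    """Smooth a sequence of per-frame phase labels with a sliding
    majority vote. The window is clipped at the sequence ends, so
    the output has the same length as the input."""
    half = window // 2
    smoothed = []
    for i in range(len(frame_phases)):
        lo, hi = max(0, i - half), min(len(frame_phases), i + half + 1)
        counts = Counter(frame_phases[lo:hi])
        smoothed.append(counts.most_common(1)[0][0])
    return smoothed
```

    A recurrent model goes further by learning which phase transitions are plausible, rather than merely suppressing outliers.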

    Intraoperative robotic-assisted large-area high-speed microscopic imaging and intervention

    Objective: Probe-based confocal endomicroscopy is an emerging high-magnification optical imaging technique that provides in-vivo, in-situ cellular-level imaging for real-time assessment of tissue pathology. Endomicroscopy could potentially be used for intraoperative surgical guidance, but assessing a surgical site from individual microscopic images is challenging due to the limited field-of-view and the difficulty of manipulating the probe manually. Methods: In this paper, a novel robotic device for large-area endomicroscopy imaging is proposed, demonstrating a rapid but highly accurate scanning mechanism with image-based motion control that can generate histology-like endomicroscopy mosaics. The device also includes, for the first time in robotic-assisted endomicroscopy, the capability to ablate tissue without the need for an additional tool. Results: The device achieves pre-programmed trajectories with a positioning accuracy of less than 30 µm, and the image-based approach was shown to suppress random motion disturbances of up to 1.25 mm/s. Mosaics are presented from a range of ex-vivo human and animal tissues, over areas of more than 3 mm², each scanned in approximately 10 s. Conclusion: This work demonstrates the potential of the proposed instrument to generate large-area, high-resolution microscopic images for intraoperative tissue identification and margin assessment. Significance: The approach presents an important alternative to current histology techniques, significantly reducing tissue assessment time while providing the capability to mark and ablate suspicious areas intraoperatively.
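    The image-based motion control mentioned in the Methods can be caricatured as a proportional visual-servoing loop: an offset measured from the live image is scaled into the next actuator correction. The function below is an illustrative sketch only; the gain, units and control law of the actual device are not reproduced here:

```python
def servo_step(target_xy, observed_xy, gain=0.5):
    """One iteration of a proportional image-based controller:
    the positional error measured in the image (target minus
    observed probe position) is scaled by a gain to produce the
    next actuator correction. Hypothetical gain for illustration."""
    ex = target_xy[0] - observed_xy[0]
    ey = target_xy[1] - observed_xy[1]
    return (gain * ex, gain * ey)
```

    Applying this correction repeatedly shrinks the residual error geometrically, which is how a servo loop of this kind rejects slow disturbances.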

    From Macro to Micro: Autonomous Multiscale Image Fusion for Robotic Surgery

    In recent years, minimally invasive robotic surgery has shown great promise for enhancing surgical precision and improving patient outcomes. Despite these advances, intraoperative tissue characterisation (such as the identification of cancerous tissue) still relies on traditional biopsy and histology, a process that is time-consuming and often disrupts the normal surgical workflow. To support effective intraoperative decision-making, emerging optical biopsy techniques, such as probe-based confocal laser endomicroscopy (pCLE) and optical coherence tomography (OCT), have been developed to provide real-time in-vivo, in-situ assessment of tissue micro-structures. Clinical deployment of these techniques, however, requires large-area surveillance, from macro (mm/cm) to micro (µm) coverage, in order to differentiate underlying tissue structures. This article presents a real-time multi-scale fusion scheme for robotic surgery. It demonstrates how the da Vinci surgical robot, used together with the da Vinci Research Kit, can perform automated 2D scanning of pCLE/OCT probes, providing large-area tissue surveillance through image stitching. Open-loop control of the robot provides insufficient precision for probe scanning, so the motion is visually servoed using the live pCLE images (for lateral position) and OCT images (for axial position). The resulting tissue maps can then be fused in real-time with a stereo reconstruction from the laparoscopic video, providing the surgeon with a multi-scale 3D view of the operating site.
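    Both the visual servoing and the image stitching above depend on measuring the displacement between overlapping frames. A standard, generic way to obtain such a measurement (a sketch of the general technique, not necessarily the estimator used in this work) is FFT phase correlation:

```python
import numpy as np

def estimate_shift(img_a, img_b):
    """Estimate the integer-pixel translation (dy, dx) such that
    img_a is approximately np.roll(img_b, (dy, dx), axis=(0, 1)),
    via FFT phase correlation. Simplified: integer shifts only,
    no windowing or subpixel refinement."""
    f = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))
    f /= np.abs(f) + 1e-12          # keep phase, discard magnitude
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = img_a.shape
    if dy > h // 2:                 # unwrap circular shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

    The lateral pCLE correction and the frame-to-frame offsets needed for stitching are both, at heart, displacement estimates of this kind.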

    En-face optical coherence tomography/fluorescence endomicroscopy for minimally invasive imaging using a robotic scanner

    We report a compact rigid instrument capable of delivering en-face optical coherence tomography (OCT) images alongside (epi-)fluorescence endomicroscopy (FEM) images by means of a robotic scanning device. Two working imaging channels are included: one for a one-dimensional scanning, forward-viewing OCT probe and another for a fiber bundle used for the FEM system. The robotic scanning system provides the second axis of scanning for the OCT channel while allowing the field of view (FoV) of the FEM channel to be increased by mosaicking. The OCT channel has resolutions of 25/60 µm (axial/lateral) and can provide en-face images with an FoV of 1.6 × 2.7 mm². The FEM channel has a lateral resolution better than 8 µm and can generate an FoV of 0.53 × 3.25 mm² through mosaicking. The reproducibility of the scanning, determined using phantoms, is better than the lateral resolution of the OCT channel. Combined OCT and FEM imaging was validated on ex-vivo ovine and porcine tissues, with the instrument mounted on an arm to ensure constant contact between the probe and the tissue. The OCT imaging system alone was validated for in-vivo human dermal imaging with the handheld instrument. In both cases, the instrument was capable of resolving fine features such as the sweat glands in human dermal tissue and the alveoli in porcine lung tissue.

    Robotics for surgical microscopy

    Advances in surgery have had a significant impact on cancer treatment and management. Recurrence, however, is still a major issue and is often associated with incomplete tumour removal. Thus far, histopathological examination remains the "gold standard" for assessing the completeness of tumour resection; however, it is operator-dependent and too slow for intraoperative use. Recently developed endomicroscopy techniques enable the acquisition of high-resolution images at a cellular level in situ and in vivo, significantly extending the information available intraoperatively. The miniaturised imaging probes incorporate flexible fibre bundles and allow easy integration with surgical instruments. However, manual control of these probes is challenging, particularly in terms of maintaining consistent tissue contact and performing large-area surveillance of complex, deformable, 3D structures. This thesis explores the use of surgical robots and robotically-assisted probe manipulation to provide stable, precise, consistent and dexterous manipulation of endomicroscopy probes for surgical applications. Following a discussion of image enhancement techniques, a first approach towards robotically-assisted probe manipulation using existing surgical robotic platforms is demonstrated in the form of multi-purpose pick-up probes, which also incorporate novel force-adaptive mechanisms for consistent tissue contact. The development of bespoke, mechatronically-enhanced robotic devices is then presented. Firstly, a handheld robotic scanning device is proposed for breast-conserving surgery, allowing accurate, high-speed scanning over wide deformable tissue areas. An energy-delivery fibre is integrated into the scanning mechanism for image-guided ablation or intraoperative marking of tumour margins. Secondly, a dexterous 5-degree-of-freedom robotic instrument is proposed for use in endoluminal microsurgery. The instrument offers increased flexibility, and, using a master-slave control scheme, we demonstrate how efficient, large-area scanning over curved endoluminal surfaces can be performed. Finally, the fusion of ultrasound imaging with endomicroscopy is investigated through the development of a robotically-actuated articulated instrument for multi-modality image fusion.

    Color reflectance fiber bundle endomicroscopy without back-reflections

    Coherent fiber imaging bundles can be used as passive probes for reflectance-mode endomicroscopy, provided that back-reflections from the fiber ends are efficiently rejected. We describe an approach specific to widefield endomicroscopy in which light is injected into a leached fiber bundle near the distal end, thereby avoiding reflections from the proximal face. We use this method to demonstrate color widefield reflectance endomicroscopy of ex-vivo animal tissue.

    Autonomous scanning for endomicroscopic mosaicing and 3D fusion

    Robot-assisted minimally invasive surgery can benefit from the automation of common, repetitive or well-defined but ergonomically difficult tasks. One such task is the scanning of a pick-up endomicroscopy probe over a complex, undulating tissue surface to enhance the effective field-of-view through video mosaicing. In this paper, the da Vinci® surgical robot is used, through the dVRK framework, for autonomous scanning and 2D mosaicing over a user-defined region of interest. To achieve the precision required for high-quality mosaic generation, which relies on sufficient overlap between consecutive image frames, visual servoing is performed using a combination of a tracking marker attached to the probe and the endomicroscopy images themselves. The resulting sub-millimetre accuracy of the probe motion allows large mosaics to be generated with minimal intervention from the surgeon. Images are streamed from the endomicroscope and overlaid live onto the surgeon's view, while 2D mosaics are generated in real-time and fused into a 3D stereo reconstruction of the surgical scene, providing intuitive visualisation and fusion of the multi-scale images. The system therefore offers significant potential to enhance surgical procedures by providing the operator with cellular-scale information over a larger area than could typically be achieved by manual scanning.
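    The overlap requirement above directly constrains the scan path: the probe must step by less than one field-of-view between frames. A minimal sketch of a serpentine scan-path generator under that constraint (the parameters and units are illustrative, not the paper's):

```python
def raster_scan(region_w, region_h, fov, overlap=0.3):
    """Generate probe-centre positions for a serpentine raster scan
    of a region, stepping by (1 - overlap) * fov so that consecutive
    frames share a fraction `overlap` of their content for mosaicking.
    Units are arbitrary; alternate rows are reversed to minimise
    travel between rows."""
    step = fov * (1.0 - overlap)
    xs = [i * step for i in range(int(region_w // step) + 1)]
    ys = [j * step for j in range(int(region_h // step) + 1)]
    path = []
    for j, y in enumerate(ys):
        row = xs if j % 2 == 0 else xs[::-1]  # serpentine ordering
        path.extend((x, y) for x in row)
    return path
```

    In the paper the commanded positions are additionally corrected by visual servoing, since open-loop execution of such a path is not accurate enough on its own.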

    A miniaturised robotic probe for real-time intraoperative fusion of ultrasound and endomicroscopy

    Transanal Endoscopic Microsurgery (TEM) is a minimally invasive oncological resection procedure that utilises a natural-orifice approach rather than the traditional abdominal or open approach. However, TEM has a significant recurrence rate due to incomplete excisions, which may be attributable to the absence of intraoperative image guidance. Real-time histological data could allow surgeons to assess the surgical margins intraoperatively and adjust the procedure accordingly. This paper presents the integration of endomicroscopy and ultrasound imaging in a robotically-actuated instrument. Endomicroscopy provides high-resolution images at the surface level, while ultrasound provides depth-resolved information at a macroscopic level. Endomicroscopy scanning is achieved with a novel scanning approach featuring a passive force-adaptive mechanism, and the instrument is manipulated across the surgical workspace through an articulated flexible shaft. This combination enables large-area mosaics coupled with ultrasound scanning. In addition, the use of endoscopic tracking is demonstrated, allowing three-dimensional reconstruction of the ultrasound data overlaid onto the endoscopic view. An ex-vivo study on porcine colon tissue demonstrates the clinical applicability of the instrument.
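    Reconstructing tracked 2D ultrasound in three dimensions, as described above, amounts to mapping each image pixel through the tracked probe pose into a common frame. A bare-bones version of that transform (names are hypothetical; probe-to-image calibration offsets are omitted for brevity):

```python
import numpy as np

def us_pixel_to_world(u, v, pixel_size, T_world_probe):
    """Map an ultrasound image pixel (u, v) into 3D world coordinates
    using a 4x4 homogeneous tracked-probe pose. Assumes, for this
    sketch, that the image plane coincides with the probe's local
    x-y plane and that pixels are square with side `pixel_size`."""
    p_probe = np.array([u * pixel_size, v * pixel_size, 0.0, 1.0])
    return (T_world_probe @ p_probe)[:3]
```

    Accumulating these points over a tracked sweep yields the 3D ultrasound reconstruction that can then be rendered in the endoscopic view.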